
    The role of ongoing dendritic oscillations in single-neuron dynamics

    The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as temporally local, near-instantaneous mappings from the current input of the cell to its current output, brought about by somatic summation of dendritic contributions that are generated in spatially localized functional compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even remote dendritic oscillators may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations, and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought.
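The weakly-coupled-oscillator analysis described above can be illustrated with a minimal sketch (not the authors' actual model): two Kuramoto-style phase oscillators standing in for two oscillating dendritic compartments, with a hypothetical coupling strength K. For identical frequencies the phase difference locks at zero; for a small detuning it locks at sin(Δ*) = Δω/(2K).

```python
import math

def simulate_phase_locking(omega1, omega2, K, dt=0.001, steps=20000):
    """Two weakly coupled phase oscillators (Kuramoto form):
    dphi_i/dt = omega_i + K * sin(phi_j - phi_i).
    Returns the final phase difference phi1 - phi2, wrapped to [-pi, pi]."""
    phi1, phi2 = 0.0, 1.0  # arbitrary initial phases
    for _ in range(steps):
        d = phi2 - phi1
        phi1 += dt * (omega1 + K * math.sin(d))
        phi2 += dt * (omega2 + K * math.sin(-d))
    # wrap the difference into [-pi, pi]
    return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
```

With equal frequencies the difference decays to zero; with omega2 - omega1 = 0.5 and K = 1 it locks near asin(-0.25) ≈ -0.253, matching the fixed point of d(Δ)/dt = Δω - 2K sin(Δ).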

    Homeostatic Scaling of Excitability in Recurrent Neural Networks

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of the two neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity.
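The core HSE idea (an activity-dependent shift of the neural response function, as in the figure caption below) can be sketched for a single rate unit; all parameter values here are hypothetical, not taken from the paper. A slow threshold variable shifts a sigmoidal rate function until the firing rate returns to a target, regardless of the persistent drive.

```python
import math

def hse_rate(I_drive, r_target=5.0, tau=50.0, dt=0.1, steps=50000):
    """Single rate unit with homeostatic threshold adaptation.
    Rate:        r = r_max / (1 + exp(-(I - theta)))
    Homeostasis: d(theta)/dt = (r - r_target) / tau,
    which shifts the response curve rightward when the cell is too
    active and leftward when it is too quiet.
    Returns the settled (rate, threshold)."""
    r_max = 20.0
    theta = 0.0
    r = 0.0
    for _ in range(steps):
        r = r_max / (1.0 + math.exp(-(I_drive - theta)))
        theta += dt * (r - r_target) / tau
    return r, theta
```

For any sustained drive the rate settles back at r_target, while the threshold absorbs the change in input, which is the single-cell face of the network-level mechanism studied above.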

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, São Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    A cellular mechanism for system memory consolidation

    Declarative memories initially depend on the hippocampus. Over a period of weeks to years, however, these memories become hippocampus-independent through a process called system memory consolidation. The underlying cellular mechanisms are unclear. Here, we suggest a consolidation mechanism based on spike-timing-dependent plasticity (STDP) and a ubiquitous anatomical network motif. As a first step in the memory consolidation process, we consider pyramidal neurons in the hippocampal CA1 area. These cells receive Schaffer collateral (SC) input from the CA3 area at the proximal dendrites, and perforant path (PP) input from entorhinal cortex at the distal dendrites. Both pathways carry sensory information that has been processed by cortical networks and that enters the hippocampus through the entorhinal cortex. Hence, information from entorhinal cortex reaches CA1 cells through an indirect pathway (via CA3 and SC) and a direct pathway (PP). Memories are assumed to be initially stored in the recurrent CA3 network and the SC synapses during the awake, exploratory state. During a subsequent consolidation phase (during slow-wave sleep), SC-dependent memories are partly transferred to the PP synapses. Through mathematical analysis and numerical simulations, we show that this consolidation process arises naturally from the combination of (1) STDP at PP synapses and (2) the temporal correlations between SC and PP activities, since the (indirect) SC input is delayed by about 5-10 ms relative to the (direct) PP input. With a detailed compartmental model, we then show that the spatial tuning of a CA1 cell is copied from the proximal SC synapses to the distal PP synapses. Next, we repeat the network motif across many levels in a hierarchical network model: each direct connection at one level is part of the indirect pathway of the next level. Analysis and simulations of this hierarchical system demonstrate that memories gradually move from hippocampus into neocortex. Moreover, the memories show power-law forgetting, as seen in psychophysical forgetting functions. Hence, our work proposes a novel mechanism underlying system memory consolidation, bridging spatial scales from single cells to cortical areas, and time scales from milliseconds to years.
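The timing argument above (direct PP input leads the delayed, SC-driven response by 5-10 ms, so pre-before-post pairings potentiate PP synapses) can be sketched with a standard pair-based STDP rule; the amplitudes and time constants here are generic textbook values, not the paper's.

```python
import math

def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre; positive (pre before post) potentiates,
    negative (post before pre) depresses, both decaying exponentially
    with the pairing interval."""
    if dt_ms > 0:
        return A_plus * math.exp(-dt_ms / tau_plus)
    return -A_minus * math.exp(dt_ms / tau_minus)

# The direct PP input leads the indirect, SC-driven spike by ~5-10 ms,
# so each pairing yields a positive weight change at the PP synapse:
dw_pp = stdp_dw(7.0)  # 7 ms lag, within the 5-10 ms range above
```

Repeated over many replayed patterns, this systematic positive bias is what transfers the SC-stored tuning onto the PP pathway.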

    Function and energy consumption constrain neuronal biophysics in a canonical computation: Coincidence detection.

    Neural morphology and membrane properties vary greatly between cell types in the nervous system. The computations and local circuit connectivity that neurons support are thought to be the key factors constraining the cells' biophysical properties. Nevertheless, additional constraints can be expected to further shape neuronal design. Here, we focus on a particularly energy-intense system (as indicated by metabolic markers): principal neurons in the medial superior olive (MSO) nucleus of the auditory brainstem. Based on a modeling approach, we show that a trade-off between the level of performance of a functionally relevant computation and energy consumption predicts optimal ranges for cell morphology and membrane properties. The biophysical parameters appear most strongly constrained by functional needs, while energy use is minimized as long as function can be maintained. The key factors that determine model performance and energy consumption are 1) the saturation of the synaptic conductance input and 2) the temporal resolution of the postsynaptic signals as they reach the soma, which is largely determined by active membrane properties. MSO cells seem to operate close to Pareto optimality, i.e., near the trade-off boundary between performance and energy consumption that is formed by the set of optimal models. Good performance for drastically lower costs could in theory be achieved by small neurons without dendrites, as seen in the avian auditory system, pointing to additional constraints for mammalian MSO cells, including their circuit connectivity.
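The Pareto-optimality notion used above (maximize performance, minimize energy) can be made concrete with a small helper that extracts the trade-off boundary from a set of candidate models; the sample points in the usage comment are purely illustrative.

```python
def pareto_front(models):
    """Given (performance, energy) pairs, return the Pareto-optimal
    subset: points for which no other point has performance at least
    as high AND energy at least as low, with one strictly better."""
    front = []
    for p, e in models:
        dominated = any(
            (p2 >= p and e2 <= e) and (p2 > p or e2 < e)
            for p2, e2 in models
        )
        if not dominated:
            front.append((p, e))
    return front

# Hypothetical candidate models: a mid-cost point that is dominated
# (0.7 performance at 3.0 energy) drops off the front.
front = pareto_front([(0.9, 5.0), (0.8, 2.0), (0.7, 3.0), (0.95, 9.0)])
```

"Operating close to Pareto optimality" then means the measured cell parameters land near this front rather than in the dominated interior.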

    Soma-axon coupling configurations that enhance neuronal coincidence detection.

    Coincidence detector neurons transmit timing information by responding preferentially to concurrent synaptic inputs. Principal cells of the medial superior olive (MSO) in the mammalian auditory brainstem are superb coincidence detectors. They encode sound source location with high temporal precision, distinguishing submillisecond timing differences among inputs. We investigate computationally how dynamic coupling between the input region (soma and dendrite) and the spike-generating output region (axon and axon initial segment) can enhance coincidence detection in MSO neurons. To do this, we formulate a two-compartment neuron model and extensively characterize coincidence detection sensitivity throughout a parameter space of coupling configurations. We focus on the interaction between coupling configuration and two currents that provide dynamic, voltage-gated, negative feedback in the subthreshold voltage range: sodium current with rapid inactivation, and low-threshold potassium current, IKLT. These currents reduce synaptic summation and can prevent spike generation unless inputs arrive with near simultaneity. We show that strong soma-to-axon coupling promotes the negative feedback effects of sodium inactivation and is, therefore, advantageous for coincidence detection. Furthermore, the feedforward combination of strong soma-to-axon coupling and weak axon-to-soma coupling enables spikes to be generated efficiently (few sodium channels needed) and with rapid recovery that enhances high-frequency coincidence detection. These observations detail the functional benefit of the strongly feedforward configuration that has been observed in physiological studies of MSO neurons. We find that IKLT further enhances coincidence detection sensitivity, but with effects that depend on coupling configuration. For instance, in models with weak soma-to-axon and weak axon-to-soma coupling, IKLT in the axon enhances coincidence detection more effectively than IKLT in the soma. By using a minimal model of soma-axon coupling, we connect structure, dynamics, and computation. Although we consider the particular case of MSO coincidence detectors, our method for creating and exploring a parameter space of two-compartment models can be applied to other neurons.
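The asymmetric-coupling idea can be sketched with a passive two-compartment toy model (a simplification, not the paper's active model): separate forward and backward coupling conductances let the soma drive the axon strongly while the axon barely loads the soma. All conductance values are hypothetical.

```python
def two_compartment(I_soma, g_fwd=1.0, g_back=0.05, g_leak=0.1,
                    dt=0.01, steps=20000):
    """Passive two-compartment sketch with asymmetric coupling:
    strong soma->axon coupling (g_fwd) and weak axon->soma feedback
    (g_back). Returns the steady-state (V_soma, V_axon) for a
    constant current injected into the soma."""
    vs = va = 0.0
    for _ in range(steps):
        dvs = -g_leak * vs - g_back * (vs - va) + I_soma  # soma: input + weak feedback
        dva = -g_leak * va - g_fwd * (va - vs)            # axon: strongly driven by soma
        vs += dt * dvs
        va += dt * dva
    return vs, va
```

At steady state the axon follows the soma with ratio g_fwd / (g_leak + g_fwd), so the somatic signal transfers forward nearly unattenuated while the weak g_back keeps the axon from loading the input region, the feedforward regime described above.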

    Requirements for a single cell mechanism of entorhinal "grid field" activity: role of dendritic oscillators and coupling

    The responses of rat medial entorhinal cortical neurons form characteristic grid patterns as a function of the animal's position. A recent model of grid fields proposes a mechanism based on intrinsic single cell properties. It relies on interference patterns emerging from multiple distinct and independent oscillations maintained in the dendritic tree of the cell. Here we examine the requirements necessary to implement this idealized mechanism in a biophysically realistic model. We find that appropriate grid-field formation by a single cell is exquisitely sensitive to intra-dendritic interactions. Mathematical analysis shows how these effects depend on properties of the dendritic oscillators and the (active) membrane segments that connect them. We provide requirements on the ion channel distributions that would be necessary for grid fields. We implement these requirements in a compartmental model of a spiny stellate cell. We find that with realistic cell properties the intra-dendritic coupling is not weak enough to maintain grid-field activity: rather than maintaining several independent oscillators, the cell acts as a single oscillator. This work gives explicit requirements for a single cell implementation of grid-field activity and hints at a possible circuit level origin for grid pattern formation.
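The interference mechanism the model relies on can be sketched in its simplest form (generic oscillatory-interference toy, not this paper's compartmental model): summing a somatic and a dendritic oscillation whose small frequency offset, in such models, is set by running speed produces a slow beat envelope, and firing at the envelope peaks yields periodically spaced fields. The frequencies below are illustrative.

```python
import math

def interference_signal(f_soma=8.0, f_dend=8.5, t_max=4.0, dt=0.001):
    """Sum of a somatic and a dendritic oscillation. The envelope
    beats at f_dend - f_soma, so peaks recur every
    1 / (f_dend - f_soma) seconds (2 s for the defaults)."""
    n = int(t_max / dt)
    return [math.cos(2 * math.pi * f_soma * i * dt) +
            math.cos(2 * math.pi * f_dend * i * dt) for i in range(n)]
```

The single-cell requirement examined above is precisely that the two oscillations stay independent; once intra-dendritic coupling phase-locks them, f_dend - f_soma collapses to zero and the spatial interference pattern disappears.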

    Stable and unstable dynamics of a recurrent network showing HSE.

    A: Schematic of the network model with e-cells (white circles) and i-cells (red circles) receiving input via recurrent network connections and via excitatory projections from outside the network. B: The HSE mechanism operates by shifting the neural response function. The firing rate of an e-cell receiving DC current input is shown (solid line). An increase of the membrane conductance leads to a rightward shift of the response function (dashed line), making the cell less excitable; a decrease leads to a leftward shift (dotted line), increasing the cell's excitability. C–D: Response of a recurrent network with HSE to a persistent 50% increase in the external drive (top trace). In C the i-cells adapt 2.5 times more slowly than the e-cells; in D both cell types adapt on the same time scale. Top panels show the population mean of the instantaneous firing rates of the e-cells (black) and i-cells (red), computed in 1-second bins. Bottom panels show the population mean of the adapted membrane conductance of the e-cells (black) and i-cells (red). The compound external drive to e-cells and i-cells before the input increase is 1.2 kHz.